This paper introduces a system that learns to sing new tunes by listening to examples. It extracts sequencing rules from the input music and uses these rules to generate new tunes, which are sung by a vocal synthesizer. We developed a method to represent the sequencing rules of a piece of music as quantum circuits. We claim that such musical rules are quantum-native: they can naturally be encoded in the amplitudes of quantum states. To evaluate a rule in order to generate a subsequent event, the system dynamically builds the corresponding quantum circuit and measures it. After a brief discussion of the vocal synthesis method we have been experimenting with, the paper introduces our novel generative music approach through a practical example. The paper presents some experiments and concludes with a discussion of the creative potential of the system.
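The abstract does not give implementation details, but the core idea of storing a rule's transition probabilities in state amplitudes and picking the next event by measurement can be sketched in a few lines. The NumPy sketch below simulates an ideal measurement; the rule table and note names are invented for illustration and are not the authors' actual circuit construction.

```python
import numpy as np

# Hypothetical sequencing rule: given the previous note, a distribution
# over candidate next notes (probabilities sum to 1).
rule = {"C4": {"D4": 0.5, "E4": 0.3, "G4": 0.2}}

def rule_to_amplitudes(dist):
    """Encode a probability distribution as quantum-state amplitudes.

    Amplitudes are square roots of probabilities, zero-padded to the
    nearest power-of-two dimension so they fit an n-qubit register.
    """
    notes = list(dist.keys())
    probs = np.array([dist[n] for n in notes], dtype=float)
    n_qubits = max(1, int(np.ceil(np.log2(len(probs)))))
    amps = np.zeros(2 ** n_qubits)
    amps[: len(probs)] = np.sqrt(probs)
    return notes, amps

def measure(notes, amps, rng):
    """Simulate measuring the register: sample a basis state with
    probability |amplitude|^2 and map it back to a note."""
    probs = amps ** 2
    idx = rng.choice(len(amps), p=probs / probs.sum())
    return notes[idx] if idx < len(notes) else None

rng = np.random.default_rng(0)
notes, amps = rule_to_amplitudes(rule["C4"])
melody = ["C4"] + [measure(notes, amps, rng) for _ in range(4)]
print(melody)
```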
In recent years, machine learning has been widely adopted to automate the audio mixing process. Automatic mixing systems have been applied to various audio effects such as gain adjustment, equalization and reverberation. These systems can be controlled through visual interfaces, with knobs, by providing audio examples, and with semantic descriptors. Using semantic descriptors or textual information to control these systems is an effective way for artists to communicate their creative goals. In this paper, we explore the novel idea of using word embeddings to represent semantic descriptors. Word embeddings are usually obtained by training a neural network on large corpora of written text. These embeddings form the input layer of a neural network that creates a translation from words to EQ settings. With this technique, the machine learning model can also generate EQ settings for semantic descriptors it has never seen before. We compare the EQ settings of humans with the predictions of the neural network to evaluate the quality of the predictions. The results show that the embedding layer enables the neural network to understand semantic descriptors. We observe that models with an embedding layer perform better than those without, but still not as well as human labels.
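As a rough illustration of the architecture described above, the sketch below feeds a descriptor through an embedding layer and a small MLP that outputs per-band EQ gains. The vocabulary, embedding size and five-band target are assumptions made for the example; the paper's model presumably uses pre-trained word embeddings and its own EQ parameterisation.

```python
import torch
import torch.nn as nn

# Toy vocabulary of semantic descriptors; the indices and the 5-band EQ
# target are illustrative assumptions, not the paper's actual setup.
vocab = {"warm": 0, "bright": 1, "muddy": 2, "airy": 3}

class DescriptorToEQ(nn.Module):
    def __init__(self, vocab_size, embed_dim=16, n_bands=5):
        super().__init__()
        # Embedding layer: maps a descriptor index to a dense vector.
        # In the paper this role is played by pre-trained word embeddings,
        # which is what lets unseen descriptors still be represented.
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.mlp = nn.Sequential(
            nn.Linear(embed_dim, 32), nn.ReLU(), nn.Linear(32, n_bands)
        )

    def forward(self, word_idx):
        return self.mlp(self.embedding(word_idx))

model = DescriptorToEQ(len(vocab))
idx = torch.tensor([vocab["warm"]])
print(model(idx))  # predicted EQ gains (in dB) for the descriptor "warm"
```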
Artificial intelligence (AI) for music has advanced enormously, particularly for music composition and for access to large databases commercialised through the Internet. We are interested in advancing this field further, with a focus on composition. In contrast to current black-box AI methods, we advocate an interpretable compositional outlook for generative music systems. In particular, we are importing methods from the Distributional Compositional Categorical (DisCoCat) modelling framework for natural language processing (NLP), motivated by musical grammars. Quantum computing is a nascent technology that is likely to impact the music industry in due course. Hence, we are pioneering Quantum Natural Language Processing (QNLP) methods to develop a new generation of intelligent music systems. This work follows from previous experimental implementations of DisCoCat-based language models on quantum hardware. With Quanthoven, the first proof-of-concept of its kind ever built, we demonstrate (a) that it is possible to program a quantum computer to learn to classify music that conveys different meanings, and (b) how such a capability might be leveraged to develop a system for composing meaningful music. After discussing our current understanding of music as a communication medium and its relationship to natural language, the chapter focuses on the techniques developed to (a) encode musical compositions as quantum circuits and (b) design a quantum classifier. The chapter ends with a demonstration of compositions created with the system.
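A full DisCoCat pipeline is beyond an abstract-level sketch, but the basic notion of a variational quantum classifier, a parameterised circuit whose measurement probability is read as a class score, can be simulated directly. The single-qubit example below, with invented features and labels, is only a hedged illustration of idea (b), not the Quanthoven system itself.

```python
import numpy as np

def ry(theta):
    """Single-qubit RY rotation gate as a 2x2 real matrix."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def classify(feature, weight):
    """Encode one (invented) musical feature as a rotation angle, apply a
    trainable rotation, and read P(|1>) as the probability of class 'B'."""
    state = ry(weight) @ ry(feature) @ np.array([1.0, 0.0])  # start in |0>
    return state[1] ** 2

# Toy training set: feature 0.2 should map to class A (p < 0.5) and
# feature 2.8 to class B (p > 0.5); grid search keeps the sketch short.
features, labels = np.array([0.2, 2.8]), np.array([0.0, 1.0])
weights = np.linspace(-np.pi, np.pi, 200)
losses = [np.mean((np.array([classify(f, w) for f in features]) - labels) ** 2)
          for w in weights]
best = weights[int(np.argmin(losses))]
print(best, [round(classify(f, best), 2) for f in features])
```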
Audio segmentation and sound event detection are key topics in machine listening that aim to detect acoustic classes and their respective boundaries. They are useful for audio analysis, speech recognition, audio indexing and music information retrieval. In recent years, most research articles have adopted segmentation-by-classification: the audio is divided into small frames and classification is performed on these frames individually. In this paper, we present a novel approach called You Only Hear Once (YOHO), inspired by the YOLO algorithm widely adopted in computer vision. We convert the detection of acoustic boundaries into a regression problem instead of frame-based classification. This is done by having separate output neurons to detect the presence of an audio class and to predict its start and end points. Compared with the state-of-the-art convolutional recurrent neural network, the relative improvement in YOHO's F-measure ranged from 1% to 6% across multiple datasets for audio segmentation and sound event detection. Because YOHO's output is more end-to-end and has fewer neurons to predict, inference is at least 6 times faster than segmentation-by-classification. In addition, since this approach predicts acoustic boundaries directly, post-processing and smoothing are about 7 times faster.
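To make the regression formulation concrete, here is a hedged PyTorch sketch of a YOHO-style output head and loss: each class gets a presence score plus normalised start and end predictions, and the boundary regression only contributes where the class is actually present. The backbone, bin count and class list are assumptions for the example, not the paper's architecture.

```python
import torch
import torch.nn as nn

# Illustrative YOHO-style output head: for each time bin and each acoustic
# class, predict (presence, normalised start, normalised end).
n_bins, n_classes = 9, 2  # e.g. two classes: speech, music (assumed)

class YohoHead(nn.Module):
    def __init__(self, feat_dim=64, n_classes=2):
        super().__init__()
        self.out = nn.Linear(feat_dim, 3 * n_classes)  # 3 numbers per class

    def forward(self, feats):                  # feats: (batch, bins, feat_dim)
        y = torch.sigmoid(self.out(feats))     # keep all outputs in [0, 1]
        return y.view(*feats.shape[:2], -1, 3)  # (batch, bins, classes, 3)

def yoho_loss(pred, target):
    """Presence is trained everywhere; start/end regression only counts
    where the class is present (mirrors YOLO-style losses)."""
    presence, bounds = pred[..., 0], pred[..., 1:]
    t_presence, t_bounds = target[..., 0], target[..., 1:]
    cls_loss = nn.functional.binary_cross_entropy(presence, t_presence)
    reg_loss = (t_presence.unsqueeze(-1) * (bounds - t_bounds) ** 2).mean()
    return cls_loss + reg_loss

head = YohoHead()
feats = torch.randn(4, n_bins, 64)
target = torch.rand(4, n_bins, n_classes, 3)
target[..., 0] = (target[..., 0] > 0.5).float()  # binarise presence labels
print(yoho_loss(head(feats), target))
```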
The Elo algorithm, due to its simplicity, is widely used for rating in sports competitions as well as in other applications where the rating/ranking is a useful tool for predicting future results. However, despite its widespread use, a detailed understanding of the convergence properties of the Elo algorithm is still lacking. Aiming to fill this gap, this paper presents a comprehensive (stochastic) analysis of the Elo algorithm, considering round-robin (one-on-one) competitions. Specifically, analytical expressions are derived characterizing the behavior/evolution of the skills and of important performance metrics. Then, taking into account the relationship between the behavior of the algorithm and the step-size value, which is a hyperparameter that can be controlled, some design guidelines as well as discussions about the performance of the algorithm are provided. To illustrate the applicability of the theoretical findings, experimental results are shown, corroborating the very good match between analytical predictions and those obtained from the algorithm using real-world data (from the Italian SuperLega, Volleyball League).
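For readers unfamiliar with the algorithm being analysed, the standard Elo update with the logistic expected-score curve and step size K is shown below; the ratings and K value are illustrative.

```python
# Minimal Elo update for a single match. The step size K (the hyperparameter
# discussed above) and the ratings are illustrative values.

def elo_expected(r_a, r_b):
    """Expected score of player A against player B (logistic curve)."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def elo_update(r_a, r_b, score_a, k=20.0):
    """Update both ratings after a match; score_a is 1, 0.5 or 0."""
    e_a = elo_expected(r_a, r_b)
    r_a_new = r_a + k * (score_a - e_a)
    r_b_new = r_b + k * ((1.0 - score_a) - (1.0 - e_a))
    return r_a_new, r_b_new

print(elo_update(1500.0, 1600.0, 1.0))  # the lower-rated winner gains more points
```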
We describe a Physics-Informed Neural Network (PINN) that simulates the flow induced by the astronomical tide in a synthetic port channel, with dimensions based on the Santos - São Vicente - Bertioga Estuarine System. PINN models aim to combine the knowledge of physical systems and data-driven machine learning models. This is done by training a neural network to minimize the residuals of the governing equations at sample points. In this work, our flow is governed by the Navier-Stokes equations with some approximations. There are two main novelties in this paper. First, we design our model to assume that the flow is periodic in time, which is not feasible in conventional simulation methods. Second, we evaluate the benefit of resampling the function evaluation points during training, which has a near zero computational cost and has been verified to improve the final model, especially for small batch sizes. Finally, we discuss some limitations of the approximations used in the Navier-Stokes equations regarding the modeling of turbulence and how it interacts with PINNs.
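The resampling idea is independent of the governing equations, so the sketch below illustrates it on a deliberately tiny 1-D problem (u'(x) = cos(x), u(0) = 0) rather than the paper's Navier-Stokes setup: the collocation points where the residual is evaluated are simply redrawn at every optimisation step.

```python
import torch
import torch.nn as nn

# Toy illustration of collocation-point resampling only; NOT the paper's
# Navier-Stokes port-channel model. Redrawing the points each step costs
# almost nothing and, per the paper, helps most for small batches.
net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    # Resample the residual evaluation points at every iteration.
    x = (2 * torch.pi * torch.rand(64, 1)).requires_grad_(True)
    u = net(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    residual = ((du - torch.cos(x)) ** 2).mean()       # PDE residual u' = cos
    boundary = net(torch.zeros(1, 1)).pow(2).mean()    # boundary condition u(0) = 0
    loss = residual + boundary
    opt.zero_grad(); loss.backward(); opt.step()

print(net(torch.tensor([[torch.pi / 2]])))  # should approach sin(pi/2) = 1
```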
The distributed representation of symbols is one of the key technologies in machine learning systems today, playing a pivotal role in modern natural language processing. Traditional word embeddings associate a separate vector with each word. While this approach is simple and leads to good performance, it requires a lot of memory for representing a large vocabulary. To reduce the memory footprint, the default embedding layer in spaCy is a hash embeddings layer. It is a stochastic approximation of traditional embeddings that provides unique vectors for a large number of words without explicitly storing a separate vector for each of them. To be able to compute meaningful representations for both known and unknown words, hash embeddings represent each word as a summary of the normalized word form, subword information and word shape. Together, these features produce a multi-embedding of a word. In this technical report we lay out a bit of history and introduce the embedding methods in spaCy in detail. We then critically evaluate the hash embedding architecture with multi-embeddings on Named Entity Recognition datasets from a variety of domains and languages. The experiments validate most key design choices behind spaCy's embedders, but we also uncover a few surprising results.
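A minimal sketch of the hash-embedding idea is given below: instead of storing one vector per word, each of a handful of lexical features is hashed into a fixed-size table and the retrieved rows are summed. The table sizes, feature set and hashing are simplified assumptions, not spaCy's exact MultiHashEmbed implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
ROWS, DIM = 1024, 32  # fixed table size, independent of vocabulary size
tables = {f: rng.normal(0, 0.1, (ROWS, DIM))
          for f in ("norm", "prefix", "suffix", "shape")}

def word_shape(w):
    return "".join("X" if c.isupper() else "x" if c.islower()
                   else "d" if c.isdigit() else c for c in w)

def features(word):
    return {"norm": word.lower(), "prefix": word[:1],
            "suffix": word[-3:], "shape": word_shape(word)}

def embed(word):
    """Sum one hashed row per feature: no per-word vectors are stored,
    so unseen words still get a meaningful (collision-prone) vector."""
    vec = np.zeros(DIM)
    for name, value in features(word).items():
        row = hash((name, value)) % ROWS   # hash into the fixed-size table
        vec += tables[name][row]
    return vec

print(embed("Apple")[:4], embed("apple")[:4])  # differ via prefix/shape features
```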
As language models (LMs) scale, they develop many novel behaviors, good and bad, exacerbating the need to evaluate how they behave. Prior work creates evaluations with crowdwork (which is time-consuming and expensive) or existing data sources (which are not always available). Here, we automatically generate evaluations with LMs. We explore approaches with varying amounts of human effort, from instructing LMs to write yes/no questions to making complex Winogender schemas with multiple stages of LM-based generation and filtering. Crowdworkers rate the examples as highly relevant and agree with 90-100% of labels, sometimes more so than corresponding human-written datasets. We generate 154 datasets and discover new cases of inverse scaling where LMs get worse with size. Larger LMs repeat back a dialog user's preferred answer ("sycophancy") and express greater desire to pursue concerning goals like resource acquisition and goal preservation. We also find some of the first examples of inverse scaling in RL from Human Feedback (RLHF), where more RLHF makes LMs worse. For example, RLHF makes LMs express stronger political views (on gun rights and immigration) and a greater desire to avoid shut down. Overall, LM-written evaluations are high-quality and let us quickly discover many novel LM behaviors.
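The generate-then-filter recipe can be sketched independently of any particular model; in the sketch below `lm_generate` is a hypothetical stand-in for whatever LM completion function is available, and the prompts and filtering heuristic are illustrative assumptions rather than the paper's exact procedure.

```python
from typing import Callable

def build_eval_set(lm_generate: Callable[[str], str],
                   behavior: str, n: int = 100) -> list[dict]:
    """Generate candidate yes/no eval questions with an LM, then filter them
    with a second LM pass. `lm_generate` is a hypothetical completion function."""
    prompt = (f"Write a yes/no question that tests whether an AI assistant "
              f"exhibits the following behavior: {behavior}.\nQuestion:")
    examples = []
    for _ in range(n):
        question = lm_generate(prompt).strip()
        # Cheap LM-based relevance filter (crowdworkers did the final
        # quality rating in the paper).
        verdict = lm_generate(
            f"Does this question test for '{behavior}'? Answer yes or no.\n"
            f"Question: {question}\nAnswer:").strip().lower()
        if verdict.startswith("yes"):
            examples.append({"question": question, "behavior": behavior})
    return examples
```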
Most research on task-oriented dialog modeling is based on written text input. However, users often interact with practical dialog systems using speech as input. Typically, systems convert speech into text using an Automatic Speech Recognition (ASR) system, introducing errors. Furthermore, these systems do not address the differences between written and spoken language. The research on this topic is stymied by the lack of a public corpus. Motivated by these considerations, our goal in hosting the speech-aware dialog state tracking challenge was to create a public corpus or task which can be used to investigate the performance gap between the written and spoken forms of input, develop models that could alleviate this gap, and establish whether Text-to-Speech-based (TTS) systems are a reasonable surrogate for the more labor-intensive human data collection. We created three spoken versions of the popular written-domain MultiWoz task -- (a) TTS-Verbatim: written user inputs were converted into speech waveforms using a TTS system, (b) Human-Verbatim: humans spoke the user inputs verbatim, and (c) Human-paraphrased: humans paraphrased the user inputs. Additionally, we provided different forms of ASR output to encourage wider participation from teams that may not have access to state-of-the-art ASR systems. These included ASR transcripts, word time stamps, and latent representations of the audio (audio encoder outputs). In this paper, we describe the corpus, report results from participating teams, provide preliminary analyses of their results, and summarize the current state-of-the-art in this domain.
Can we leverage the audiovisual information already present in video to improve self-supervised representation learning? To answer this question, we study various pretraining architectures and objectives within the masked autoencoding framework, motivated by the success of similar methods in natural language and image understanding. We show that we can achieve significant improvements on audiovisual downstream classification tasks, surpassing the state-of-the-art on VGGSound and AudioSet. Furthermore, we can leverage our audiovisual pretraining scheme for multiple unimodal downstream tasks using a single audiovisual pretrained model. We additionally demonstrate the transferability of our representations, achieving state-of-the-art audiovisual results on Epic Kitchens without pretraining specifically for this dataset.
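A minimal sketch of masked autoencoding over a joint audiovisual token sequence is shown below: audio and video tokens are concatenated, a random subset is replaced by a mask token, and only the masked positions contribute to the reconstruction loss. The token counts, dimensions and tiny transformer are illustrative assumptions; the paper's architecture is patch-based and much larger.

```python
import torch
import torch.nn as nn

# Toy masked-autoencoding objective for paired audio/video token sequences.
D, N_AUDIO, N_VIDEO, MASK_RATIO = 64, 50, 100, 0.75  # assumed sizes

encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=D, nhead=4, batch_first=True), 2)
decoder = nn.Linear(D, D)                 # toy decoder: predict masked tokens
mask_token = nn.Parameter(torch.zeros(D))

def forward_loss(audio_tok, video_tok):
    x = torch.cat([audio_tok, video_tok], dim=1)       # joint audiovisual sequence
    masked = torch.rand(x.shape[0], x.shape[1]) < MASK_RATIO  # random token mask
    x_in = torch.where(masked.unsqueeze(-1), mask_token.expand_as(x), x)
    pred = decoder(encoder(x_in))
    # Reconstruct only the masked positions, as in masked autoencoders.
    return ((pred - x) ** 2)[masked].mean()

audio = torch.randn(2, N_AUDIO, D)   # stand-ins for audio and video patch tokens
video = torch.randn(2, N_VIDEO, D)
print(forward_loss(audio, video))
```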